
    Motion Correction for Separate Mandibular and Cranial Movements in Cone Beam CT Reconstructions.

    BACKGROUND: Patient motion is a frequently reported phenomenon in oral and maxillofacial cone beam CT (CBCT) scans, leading to reconstructions of limited usability. In certain cases, independent movements of the mandible induce unpredictable motion patterns. Previous motion correction methods cannot handle such complex patient movements.

    PURPOSE: Our goal was to design a combined motion estimation and motion correction approach for separate cranial and mandibular motions, based solely on the 2D projection images from a single scan.

    METHODS: Our iterative three-step motion correction algorithm models the two articulated motions as independent rigid motions. First, we segment the cranium and mandible in the projection images using a deep neural network. Second, we compute a 3D reconstruction with the poses of the objects' trajectories fixed. Third, we improve all poses by minimizing the projection error while keeping the reconstruction fixed. Steps two and three are repeated alternately.

    RESULTS: Our marker-free approach delivers reconstructions of up to 85% higher quality with respect to the projection error, and improves on existing techniques that model only a single rigid motion. We show results on both synthetic and real data created in different scenarios. The reconstruction of motion parameters in a real environment was evaluated on acquisitions of a skull mounted on a hexapod, creating a realistic, easily reproducible motion profile.

    CONCLUSIONS: The proposed algorithm consistently enhances the visual quality of motion-impaired CBCT scans, eliminating the need for a re-scan in certain cases and thus considerably lowering the patient's radiation dose. It can be used flexibly with differently sized regions of interest and is even applicable to local tomography.
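The alternating steps two and three described in the abstract can be sketched as a toy coordinate-descent loop. The sketch below assumes a drastically simplified 1D setting: the "reconstruction" is a vector, each rigid pose is an integer circular shift, and `motion_correct` is a hypothetical name for illustration, not the authors' implementation.

```python
import numpy as np

def motion_correct(projections, n_iters=10, max_shift=5):
    """Toy alternating motion estimation/correction.

    Step 2: hold the per-view poses (here: integer shifts) fixed and
    rebuild the reconstruction by averaging the aligned projections.
    Step 3: hold the reconstruction fixed and refine each pose by
    minimizing the projection error. The two steps alternate.
    """
    n_views = len(projections)
    shifts = np.zeros(n_views, dtype=int)          # one rigid pose per view
    recon = np.mean(projections, axis=0)           # initial (blurred) guess
    for _ in range(n_iters):
        # Step 2: reconstruction with poses fixed (align, then average).
        recon = np.mean(
            [np.roll(p, -shifts[i]) for i, p in enumerate(projections)],
            axis=0,
        )
        # Step 3: per-view pose search minimizing the projection error.
        for i, p in enumerate(projections):
            errs = [np.sum((np.roll(recon, t) - p) ** 2)
                    for t in range(-max_shift, max_shift + 1)]
            shifts[i] = int(np.argmin(errs)) - max_shift
    return recon, shifts
```

A global shift ambiguity remains (the reconstruction is only recovered up to a common offset), which is why convergence is best checked through the projection error rather than the absolute poses.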

    Cost-driven framework for progressive compression of textured meshes

    Recent advances in the digitization of geometry and radiometry routinely generate massive amounts of surface meshes with texture or color attributes. This large amount of data can be compressed using a progressive approach, which provides at decoding time low-complexity levels of detail (LoDs) that are continuously refined until the original model is retrieved. The goal of such a progressive mesh compression algorithm is to improve the overall quality of the transmission for the user by optimizing the rate-distortion trade-off. In this paper, we introduce a novel, meaningful measure for the cost of the progressive transmission of a textured mesh, based on the observation that the rate-distortion curve is in fact a staircase, which enables an effective comparison and optimization of progressive transmissions. We contribute a novel generic framework that uses this cost function to encode triangle surface meshes by multiplexing several geometry reduction steps (mesh decimation via half-edge or full-edge collapse operators, xyz quantization reduction, and uv quantization reduction). The framework can also handle textures by multiplexing an additional texture reduction step. We further design a texture atlas that preserves texture seams during decimation without impairing the quality of the resulting LoDs. For encoding the inverse mesh decimation steps, we contribute a significant improvement over the state of the art in terms of rate-distortion performance, yielding a compression rate of 22:1 on average. Finally, we propose a single-rate alternative solution that selects a subset of the LoDs, optimized for our cost function, and uses our atlas to enable interleaved progressive texture refinements.
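The staircase observation can be illustrated with a small sketch: if distortion stays constant until each refinement batch is fully decoded, the transmission cost is the area under the rate-distortion staircase, and multiplexing amounts to ordering the reduction steps to shrink that area. `staircase_cost`, the step names, and the drop-per-bit greedy heuristic below are illustrative assumptions, not the paper's actual cost function or optimizer.

```python
def staircase_cost(levels):
    """levels: [(cumulative_bits, distortion)], where the first entry is
    the state before any refinement arrives and distortion is piecewise
    constant between decoded refinements. Returns the area under the
    rate-distortion staircase."""
    cost = 0.0
    for (r0, d0), (r1, _) in zip(levels, levels[1:]):
        cost += d0 * (r1 - r0)   # one stair: plateau distortion x rate spent
    return cost

def greedy_schedule(d0, steps):
    """steps: [(name, bits, distortion_drop)]. Greedily transmits the step
    with the best distortion drop per bit first (a simple heuristic to
    order geometry/texture reduction steps)."""
    order = sorted(steps, key=lambda s: s[2] / s[1], reverse=True)
    levels, rate, dist = [(0, d0)], 0, d0
    for _, bits, drop in order:
        rate += bits
        dist -= drop
        levels.append((rate, dist))
    return order, levels
```

For example, three hypothetical 100-bit steps dropping distortion by 6, 3, and 1 are cheapest when sent in that order, since the largest plateau reductions then happen earliest in the transmission.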